
    Vacuum Stability of the Wrong-Sign $(-\phi^{6})$ Scalar Field Theory

    We apply the effective potential method to study the vacuum stability of the bounded-from-above $(-\phi^{6})$ (unstable) quantum field potential. The stability condition $(\partial E/\partial b = 0)$ and the mass renormalization condition $(\partial^{2} E/\partial b^{2} = M^{2})$ force the effective potential of this theory to be bounded from below (stable). Since bounded-from-below potentials are always associated with localized wave functions, the algorithm we use replaces the boundary condition applied to the wave functions in the complex contour method by two stability conditions on the obtained effective potential. To test the validity of our calculations, we show that our variational predictions exactly reproduce the results in the literature for the $\mathcal{PT}$-symmetric $\phi^{4}$ theory. We then extend the algorithm to the unstudied stability problem of the bounded-from-above $(-\phi^{6})$ scalar field theory, where classical analysis prohibits the existence of a stable spectrum. To this end, we calculated the effective potential up to first order in the couplings in $d$ space-time dimensions. We find that a Hermitian effective theory is unstable, while a non-Hermitian but $\mathcal{PT}$-symmetric effective theory, characterized by a pure imaginary vacuum condensate, is stable (bounded from below), contrary to the classical prediction that the theory is unstable. We assert that the work presented here represents the first calculation that advocates the stability of the $(-\phi^{6})$ scalar potential. Comment: 21 pages, 12 figures. In this version, we updated the text and added some figures.
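The two conditions quoted in the abstract can be written in display form (here $E$ is the effective potential, $b$ the variational vacuum-condensate parameter, and $M$ the renormalized mass; this reading of the symbols is our interpretation of the abstract's notation):

```latex
\frac{\partial E}{\partial b} = 0 \quad \text{(stability)}, \qquad
\frac{\partial^{2} E}{\partial b^{2}} = M^{2} \quad \text{(mass renormalization)}
```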

    Scalable Similarity Search for Molecular Descriptors

    Similarity search over chemical compound databases is a fundamental task in the discovery and design of novel drug-like molecules. Such databases often encode molecules as non-negative integer vectors, called molecular descriptors, which represent rich information on various molecular properties. While there exist efficient indexing structures for searching databases of binary vectors, solutions for more general integer vectors are in their infancy. In this paper we present a time- and space-efficient index for the problem, which we call the succinct intervals-splitting tree algorithm for molecular descriptors (SITAd). Our approach extends efficient methods for binary-vector databases and uses ideas from succinct data structures. Our experiments, on a large database of over 40 million compounds, show that SITAd significantly outperforms alternative approaches in practice. Comment: To appear in the Proceedings of SISAP'1
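The abstract does not spell out the similarity measure; for non-negative integer descriptor vectors a common choice is the generalized (min-max) Tanimoto coefficient, sketched below as a minimal reference point (the function name and toy vectors are ours, not from the paper):

```python
def minmax_tanimoto(a, b):
    """Generalized (min-max) Tanimoto similarity for non-negative
    integer vectors; reduces to the usual Tanimoto on 0/1 vectors."""
    num = sum(min(x, y) for x, y in zip(a, b))
    den = sum(max(x, y) for x, y in zip(a, b))
    return num / den if den else 1.0

# Toy 4-dimensional descriptor vectors (counts, not just presence/absence):
query = [3, 0, 2, 1]
target = [1, 0, 2, 4]
sim = minmax_tanimoto(query, target)  # (1 + 0 + 2 + 1) / (3 + 0 + 2 + 4) = 4/9
```

An index such as SITAd serves range queries over this kind of measure without scanning all vectors; the brute-force function above is only the semantics being accelerated.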

    Understanding Conditional Associations between ToxCast in Vitro Readouts and the Hepatotoxicity of Compounds Using Rule-Based Methods

    Current in vitro models for hepatotoxicity commonly suffer from low detection rates due to incomplete coverage of bioactivity space. Additionally, in vivo exposure measures such as Cmax, which are unavailable early on, are used for hepatotoxicity screening. Here we propose a novel rule-based framework to extract interpretable and biologically meaningful multi-conditional associations to prioritize in vitro endpoints for hepatotoxicity and understand the associated physicochemical conditions. The data used in this study were derived for 673 compounds from 361 ToxCast bioactivity measurements and 29 calculated physicochemical properties against two lowest effective levels (LEL) of rodent hepatotoxicity from ToxRefDB, namely 15 mg/kg/day and 500 mg/kg/day. In order to achieve 80% coverage of toxic compounds, 35 rules with accuracies ranging from 96% to 73%, using 39 unique ToxCast assays, are needed at a threshold level of 500 mg/kg/day, whereas to describe the same coverage at a threshold of 15 mg/kg/day, 20 rules with accuracies between 98% and 81% were needed, comprising 24 unique assays. Despite the 33-fold difference in dose levels, we found relative consistency in the key mechanistic groups in rule clusters, namely i) activities against Cytochrome P, ii) immunological responses, and iii) nuclear receptor activities. Less specific effects, such as oxidative stress and cell cycle arrest, were used more often by rules describing toxicity at the 500 mg/kg/day level. Although endocrine disruption through nuclear receptor activity formed an essential cluster of rules, this bioactivity is not covered in four commercial assay setups for hepatotoxicity. Using an external set of 29 drugs with drug-induced liver injury (DILI) labels, we found that promiscuity over important assays discriminates between compounds with different levels of liver injury.
In vitro-in vivo associations were also improved by incorporating physicochemical properties, especially for the potent 15 mg/kg/day toxicity level, as well as for assays describing nuclear receptor activity and phenotypic changes. The most frequently used physicochemical properties predictive for hepatotoxicity in combination with assay activities are linked to bioavailability: the number of rotatable bonds (fewer than 7) at the 15 mg/kg/day level, and the number of rings (fewer than 3) at the 500 mg/kg/day level. In summary, hepatotoxicity cannot be captured very well by single assay endpoints, but better by a combination of bioactivities in relevant assays, with the likelihood of hepatotoxicity increasing with assay promiscuity. Together these findings can be used to prioritize assay combinations which are appropriate to assess potential hepatotoxicity.
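As a rough illustration of the rule framework described above, a multi-conditional rule can be scored for coverage and accuracy as follows (the assay name, thresholds, and compound records are invented for illustration, not taken from the study):

```python
# Toy compound records: one assay readout, one physicochemical property,
# and a toxicity label. All names and thresholds here are illustrative.
compounds = [
    {"assay_hit": True,  "rot_bonds": 4, "toxic": True},
    {"assay_hit": True,  "rot_bonds": 9, "toxic": False},
    {"assay_hit": False, "rot_bonds": 3, "toxic": False},
    {"assay_hit": True,  "rot_bonds": 5, "toxic": True},
]

def matches(compound, rule):
    """A rule is a list of (key, predicate) conditions, all of which must hold."""
    return all(pred(compound[key]) for key, pred in rule)

# "assay active AND fewer than 7 rotatable bonds" (cf. the bioavailability rules)
rule = [("assay_hit", lambda v: v), ("rot_bonds", lambda v: v < 7)]

matched = [c for c in compounds if matches(c, rule)]
accuracy = sum(c["toxic"] for c in matched) / len(matched)    # toxic among matched
toxic = [c for c in compounds if c["toxic"]]
coverage = sum(matches(c, rule) for c in toxic) / len(toxic)  # toxic that are matched
```

The study's 80% coverage targets correspond to requiring the union of many such rules to match at least 80% of the toxic compounds.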

    Maximizing gain in high-throughput screening using conformal prediction

    Iterative screening has emerged as a promising approach to increase the efficiency of screening campaigns compared to traditional high-throughput approaches. By learning from a subset of the compound library, predictive models can infer which compounds to screen next, resulting in more efficient screening. One way to evaluate screening is to consider the cost of screening compared to the gain associated with finding an active compound. In this work, we introduce a conformal predictor coupled with a gain-cost function with the aim to maximise gain in iterative screening. Using this setup we were able to show that, by evaluating the predictions on the training data, very accurate predictions can be made about which settings will produce the highest gain on the test data. We evaluate the approach on 12 bioactivity datasets from PubChem, training the models using 20% of the data. Depending on the settings of the gain-cost function, the settings generating the maximum gain were accurately identified in 8–10 out of the 12 datasets. Broadly, our approach can predict which strategy generates the highest gain based on the results of the cost-gain evaluation: to screen the compounds predicted to be active, to screen all the remaining data, or not to screen any additional compounds. When the algorithm indicates that the predicted active compounds should be screened, our approach also indicates what confidence level to apply in order to maximize gain. Hence, our approach facilitates decision-making and allocation of resources where they deliver the most value by indicating in advance the likely outcome of a screening campaign. The research at Swetox (UN) was supported by the Knut and Alice Wallenberg Foundation and the Swedish Research Council FORMAS. AMA was supported by AstraZeneca.
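The three strategies listed above can be compared with a simple gain-cost function. The sketch below uses made-up counts, cost, and reward, and is only a schematic of the kind of evaluation described, not the paper's implementation:

```python
# Schematic gain-cost evaluation for iterative screening strategies.
# Counts, cost, and reward are invented numbers for illustration only.
def gain(n_screened, n_hits, cost=1.0, reward=100.0):
    """Net gain: reward per active found minus total screening cost."""
    return n_hits * reward - n_screened * cost

strategies = {
    "screen_predicted_actives": gain(n_screened=500, n_hits=40),   # 3500.0
    "screen_everything":        gain(n_screened=8000, n_hits=80),  # 0.0
    "screen_nothing":           gain(0, 0),                        # 0.0
}
best = max(strategies, key=strategies.get)
```

In the paper's setup a conformal predictor additionally supplies the confidence level at which to screen the predicted actives; that calibration step is omitted here.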

    In vivo imaging of neuromelanin in Parkinson's disease using 18F-AV-1451 PET.

    The tau tangle ligand (18)F-AV-1451 ((18)F-T807) binds to neuromelanin in the midbrain, and may therefore be a measure of the pigmented dopaminergic neuronal count in the substantia nigra. Parkinson's disease is characterized by progressive loss of dopaminergic neurons. Extrapolation of post-mortem data predicts that a ∼30% decline of nigral dopamine neurons is necessary to cause motor symptoms in Parkinson's disease. Putamen dopamine terminal loss at disease onset most likely exceeds that of the nigral cell bodies and has been estimated to be of the order of 50-70%. We investigated the utility of (18)F-AV-1451 positron emission tomography to visualize the concentration of nigral neuromelanin in Parkinson's disease and correlated the findings to dopamine transporter density, measured by (123)I-FP-CIT single photon emission computed tomography. A total of 17 patients with idiopathic Parkinson's disease and 16 age- and sex-matched control subjects had (18)F-AV-1451 positron emission tomography using a Siemens high-resolution research tomograph. Twelve patients with Parkinson's disease also received a standardized (123)I-FP-CIT single photon emission computed tomography scan at our imaging facility. Many of the patients with Parkinson's disease displayed visually apparent decreased (18)F-AV-1451 signal in the midbrain. On quantitation, patients showed a 30% mean decrease in total nigral (18)F-AV-1451 volume of distribution compared with controls (P = 0.004), but there was an overlap of the individual ranges. We saw no significant correlation between symptom dominant side and contralateral nigral volume of distribution. There was no correlation between nigral (18)F-AV-1451 volume of distribution and age or time since diagnosis. 
In the subset of 12 patients, who also had a (123)I-FP-CIT scan, the mean total striatal dopamine transporter signal was decreased by 45% and the mean total (18)F-AV-1451 substantia nigra volume of distribution was decreased by 33% after a median disease duration of 4.7 years (0.5-12.4 years). (18)F-AV-1451 positron emission tomography may be the first radiotracer to reflect the loss of pigmented neurons in the substantia nigra of parkinsonian patients. The magnitude of the nigral signal loss was smaller than the decrease in striatal dopamine transporter signal measured by dopamine transporter single photon emission computed tomography. These findings suggest a more severe loss of striatal nerve terminal function compared with neuronal cell bodies, in accordance with the post-mortem literature.

    Information-Derived Mechanistic Hypotheses for Structural Cardiotoxicity

    Adverse events resulting from drug therapy can be a cause of drug withdrawal and reduced and/or restricted clinical use, as well as a major economic burden for society. To increase the safety of new drugs, there is a need to better understand the mechanisms causing the adverse events. One way to derive new mechanistic hypotheses is by linking data on drug adverse events with the drugs’ biological targets. In this study, we have used data mining techniques and mutual information statistical approaches to find associations between reported adverse events collected from the FDA Adverse Event Reporting System and assay outcomes from ToxCast, with the aim to generate mechanistic hypotheses related to structural cardiotoxicity (morphological damage to cardiomyocytes and/or loss of viability). Our workflow identified 22 adverse event-assay outcome associations. From these associations, 10 implicated targets could be substantiated with evidence from previous studies reported in the literature. For two of the identified targets, we also describe a more detailed mechanism, forming putative adverse outcome pathways associated with structural cardiotoxicity. Our study also highlights the difficulties of deriving these types of associations from the very limited amount of data available.
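The mutual information statistic mentioned above, for binary adverse-event/assay indicators, can be estimated from joint observations; a minimal sketch (our own, not the study's workflow, with toy data):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (in bits) between two binary variables,
    estimated from a list of joint (x, y) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# A perfectly associated event/assay pair yields 1 bit; an independent
# pair yields 0 bits (toy observations, not FAERS/ToxCast data):
perfect = [(1, 1), (1, 1), (0, 0), (0, 0)]
independent = [(1, 0), (1, 1), (0, 0), (0, 1)]
```

In practice the estimate is computed over many drugs per event-assay pair, and high-MI pairs become candidate mechanistic hypotheses.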

    Evaluation of machine-learning methods for ligand-based virtual screening

    Machine-learning methods can be used for virtual screening by analysing the structural characteristics of molecules of known (in)activity, and we here discuss the use of kernel discrimination and naive Bayesian classifier (NBC) methods for this purpose. We report a kernel method that allows the processing of molecules represented by binary, integer and real-valued descriptors, and show that its screening performance differs little from that of a previously described kernel developed specifically for the analysis of binary fingerprint representations of molecular structure. We then evaluate the performance of an NBC when the training set contains only a few active molecules. In such cases, a simpler approach based on group fusion would appear to provide superior screening performance, especially when structurally heterogeneous datasets are to be processed.
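A minimal sketch of a naive Bayesian classifier over binary fingerprints, of the general kind discussed above (the Laplace smoothing and toy fingerprints are our choices, not the paper's exact formulation):

```python
from math import log

def train_nbc(fps, labels):
    """Per-bit log-likelihood ratios log P(bit=1|active)/P(bit=1|inactive),
    with Laplace smoothing so unseen bits do not give infinite weights."""
    n_bits = len(fps[0])
    act = [fp for fp, y in zip(fps, labels) if y]
    inact = [fp for fp, y in zip(fps, labels) if not y]
    weights = []
    for i in range(n_bits):
        p_act = (sum(fp[i] for fp in act) + 1) / (len(act) + 2)
        p_inact = (sum(fp[i] for fp in inact) + 1) / (len(inact) + 2)
        weights.append(log(p_act / p_inact))
    return weights

def score(fp, weights):
    # Only the set bits of the query fingerprint contribute to the score.
    return sum(w for bit, w in zip(fp, weights) if bit)

# Toy binary fingerprints with activity labels (1 = active):
fps = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]]
labels = [1, 1, 0, 0]
weights = train_nbc(fps, labels)
ranked = sorted(range(len(fps)), key=lambda i: score(fps[i], weights),
                reverse=True)  # library indices, most active-like first
```

Virtual screening then amounts to scoring the whole library with the trained weights and testing the top of the ranked list first.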